On Mini-Batch Training with Varying Length Time Series
In real-world time series recognition applications, the data can contain
patterns of varying lengths. However, when using artificial neural networks
(ANNs), it is standard practice to use fixed-size mini-batches. To do this,
time series of varying lengths are typically normalized so that all patterns
have the same length; usually this is done with zero padding or truncation
without much consideration. We propose a novel method of normalizing the
lengths of the time series in a dataset by exploiting the dynamic matching
ability of Dynamic Time Warping (DTW). In this way, the time series lengths in
a dataset can be set to a fixed size while maintaining features typical of the
dataset. In the experiments, all 11 datasets with varying-length time series
from the 2018 UCR Time Series Archive are used. We evaluate the proposed method
by comparing it with 18 other length normalization methods on a Convolutional
Neural Network (CNN), a Long Short-Term Memory network (LSTM), and a
Bidirectional LSTM (BLSTM).
Comment: Accepted to ICASSP 202
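The core idea above — using DTW's alignment to resize a series to a fixed length — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the choice of averaging elements aligned to each reference position, and the toy data are all my own assumptions for illustration:

```python
# Illustrative sketch (not the paper's method): resize a 1-D series to a
# reference length by following the DTW warping path.
import numpy as np

def dtw_path(a, b):
    """Compute the DTW alignment path between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack from (n, m) to (1, 1) to recover the warping path.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def normalize_length(series, reference):
    """Map `series` onto len(reference) positions by averaging the
    elements that DTW aligns to each reference index (an assumed rule)."""
    out = np.zeros(len(reference))
    counts = np.zeros(len(reference))
    for i, j in dtw_path(reference, series):
        out[i] += series[j]
        counts[i] += 1
    return out / counts

ref = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
short = np.array([0.0, 2.0, 0.0])
print(normalize_length(short, ref))  # → [0. 0. 2. 2. 0.]
```

In this sketch the length-3 series is stretched to the reference length of 5, with repeated elements where DTW matches one input sample to several reference positions — in contrast to zero padding, the resulting series keeps the shape of the original pattern.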
On the Ability of a CNN to Realize Image-to-Image Language Conversion
The purpose of this paper is to reveal the ability of Convolutional Neural
Networks (CNNs) on the novel task of image-to-image language conversion. We
propose a new network to tackle this task by converting images of Korean Hangul
characters directly into images of their phonetic Latin-character equivalents.
The conversion rules between Hangul and the phonetic symbols are not explicitly
provided. The results show that the proposed network can perform image-to-image
language conversion and that it can grasp the structural features of Hangul
even from limited training data. In addition, this work introduces a network
design for cases where the input and output have significantly different
features.
Comment: Published at ICDAR 201